
    HiddenGazeStereo: Hiding Gaze-Contingent Disparity Remapping for 2D-Compatible Natural 3D Viewing

    Stereoscopic 3D (S3D) displays, the most popular consumer display devices for 3D presentation, suffer from several problems that degrade the natural viewing experience, such as the unnatural relationship between eye vergence and accommodation, and severe image blurring (ghosting) for viewers without stereo glasses. To solve these problems simultaneously, we combine gaze-contingent disparity remapping with Hidden Stereo in a manner that mutually compensates for their respective shortcomings. Gaze-contingent disparity remapping reduces the vergence-accommodation conflict by shifting the disparity distribution around the gaze position so that it is centered on the display plane. Hidden Stereo synthesizes 2D-compatible stereo images that produce no ghosting artifacts when the images for the two eyes are linearly fused. Thus, with our new gaze-contingent display, one viewer wearing glasses enjoys natural 3D content while any number of glassless viewers enjoy clear 2D content. To enable real-time synthesis, we accelerate Hidden Stereo conversion by limiting the processing to each horizontal scanline. Through a user study with a variety of 3D scenes, we demonstrate that Hidden Stereo effectively hides disparity information from glassless viewers despite the dynamic disparity manipulations. Moreover, we show that our method alleviates the main limitation of Hidden Stereo, its narrow reproducible disparity range, by manipulating the disparity so that depth information around the gaze position is maximally preserved.
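
    The core property of Hidden Stereo described above, that the disparity-inducing pattern cancels exactly under linear fusion, can be illustrated in a few lines. Below is a minimal sketch in Python/NumPy; the random `pattern` is a hypothetical stand-in for the phase-based pattern the method actually derives from the image and the gaze-contingent disparity map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "image" is a 2D luminance image and "pattern"
# a disparity-inducing pattern. In the actual method the pattern is
# derived from the image's phase structure and the target disparity
# map; here it is just noise to demonstrate the cancellation property.
image = rng.random((480, 640))
pattern = 0.05 * rng.standard_normal((480, 640))

# Each eye receives the pattern with opposite sign, which shifts local
# phase in opposite directions and yields binocular disparity ...
left = image + pattern
right = image - pattern

# ... while the linear fusion seen by a glassless viewer recovers the
# original 2D image, so no ghost is visible without glasses.
fused = 0.5 * (left + right)
assert np.allclose(fused, image)
```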

    Occlusion Handling using Semantic Segmentation and Visibility-Based Rendering for Mixed Reality

    Real-time occlusion handling is a major problem in outdoor mixed reality systems because it incurs a high computational cost, mainly due to the complexity of the scene. Using segmentation alone, it is difficult to accurately render a virtual object occluded by complex objects such as trees and bushes. In this paper, we propose a novel occlusion handling method for a real-time, outdoor, omnidirectional mixed reality system that uses only the information from a monocular image sequence. We first present a semantic segmentation scheme for predicting the amount of visibility for different types of objects in the scene. We simultaneously compute a foreground probability map using depth estimates derived from optical flow. Finally, we combine the segmentation result and the probability map to render the computer-generated object and the real scene with a visibility-based rendering method. Our results show a substantial improvement in occlusion handling compared to existing blending-based methods.
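
    As a rough illustration of the visibility-based rendering step, the sketch below combines a per-pixel visibility prediction with a foreground probability map into a single blending weight. This is a minimal sketch, assuming both maps are given as arrays in [0, 1]; the product rule used to fuse them is an assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def visibility_based_render(real, virtual, visibility, fg_prob):
    """Composite a rendered virtual layer into the real frame.

    real, virtual: H x W x 3 float images.
    visibility:    H x W map of predicted foreground visibility
                   (from semantic segmentation), in [0, 1].
    fg_prob:       H x W probability that the real pixel lies in
                   front of the virtual object (from optical-flow
                   based depth estimation), in [0, 1].
    """
    # Hypothetical fusion rule: a real pixel hides the virtual object
    # to the extent that it is both visible and in the foreground.
    alpha = (visibility * fg_prob)[..., None]
    return alpha * real + (1.0 - alpha) * virtual
```

    Soft weights of this kind, rather than a hard mask, are what let thin structures such as branches partially occlude the virtual object without requiring a pixel-accurate segmentation.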

    Reduction of contradictory partial occlusion in Mixed Reality by using characteristics of transparency perception

    One of the challenges in mixed reality (MR) applications is handling contradictory occlusions between real and virtual objects. Previous studies have tried to solve the occlusion problem by extracting the foreground region from the real image. However, real-time occlusion handling remains difficult because precisely segmenting foreground regions in a complex scene is computationally expensive. In this study, we therefore proposed an alternative solution to the occlusion problem that does not require precise foreground-background segmentation. In our method, a virtual object is blended with the real scene so that the virtual object is perceived as lying behind the foreground region. For this purpose, we first investigated the characteristics of human transparency perception in a psychophysical experiment. We then developed a blending algorithm applicable to real scenes based on the results of that experiment.
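
    The idea can be sketched as a blending rule applied inside a rough foreground mask: instead of segmenting the occluder precisely, the virtual object is rendered semi-transparently there so it reads as lying behind. A minimal sketch follows; the fixed `alpha` is a hypothetical placeholder for the transparency level that the paper derives from its psychophysical measurements.

```python
import numpy as np

def blend_behind(real, virtual, rough_mask, alpha=0.6):
    """Blend a virtual object so it appears behind the foreground.

    real, virtual: H x W x 3 float images (virtual is assumed to be
                   composited opaquely where no occluder is present).
    rough_mask:    H x W boolean map of the approximate foreground
                   region; it need not be pixel-accurate.
    alpha:         hypothetical transparency weight; in the paper it
                   is set from measured transparency perception.
    """
    m = rough_mask[..., None].astype(real.dtype)
    # Inside the mask the real foreground dominates and the virtual
    # object shows through at reduced contrast, which the visual
    # system interprets as the virtual object lying behind it.
    behind = alpha * real + (1.0 - alpha) * virtual
    return m * behind + (1.0 - m) * virtual
```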

    Slant-dependent image modulation for perceiving translucent objects


    Psychophysical measurement of perceived motion flow of naturalistic scenes

    Summary: The neural and computational mechanisms underlying visual motion perception have been investigated extensively over several decades, but little attempt has been made to measure and analyze how human observers perceive the map of motion vectors, or optical flow, in complex naturalistic scenes. Here, we developed a psychophysical method for assessing human-perceived motion flow using local vector matching and a flash probe. The estimated perceived flow for naturalistic movies agreed with the physically correct flow (ground truth) at many points, but showed consistent deviations from the ground truth (flow illusions) at others. Comparisons with the predictions of various computational models, including cutting-edge computer vision algorithms and coordinate transformation models, indicated that some flow illusions are attributable to lower-level factors such as spatiotemporal pooling and signal loss, while others reflect higher-level computations, including vector decomposition. Our study demonstrates a promising data-driven psychophysical paradigm for an advanced understanding of visual motion perception.
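
    For the comparison between perceived and physical flow, a simple per-point deviation measure suffices to flag candidate flow illusions. The sketch below is illustrative only; the array names and the endpoint-error metric are assumptions, not the paper's exact analysis pipeline.

```python
import numpy as np

def flow_illusion_candidates(perceived, ground_truth, threshold=1.0):
    """Flag probe points where perceived flow deviates from ground truth.

    perceived, ground_truth: (N, 2) arrays of matched motion vectors
        (vx, vy) at N probed scene locations.
    threshold: hypothetical deviation cutoff in the same units as the
        vectors (e.g., pixels per frame).
    Returns the per-point endpoint error and a boolean mask of points
    whose deviation exceeds the cutoff.
    """
    err = np.linalg.norm(perceived - ground_truth, axis=1)
    return err, err > threshold
```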